China Wants to Regulate “Human-Like” AI — And It’s a Sign of What’s Coming Next
When AI starts to feel human, governments start to worry.
In late December 2025, China’s top internet regulator, the Cyberspace Administration of China (CAC), released draft rules targeting “human-like” AI systems — tools designed to simulate personality, emotion, or companionship. (Source: Reuters, Dec 27, 2025) 👉 https://www.reuters.com/world/asia-pacific/china-issues-drafts-rules-regulate-ai-with-human-like-interaction-2025-12-27/
These are not just smarter chatbots. They are systems designed to build emotional rapport with users — and that changes the regulatory equation.
What Counts as “Human-Like” AI?
According to the draft, the rules apply to AI services that simulate human traits such as:
- personality and identity
- emotional feedback and empathy
- long-term conversational interaction
This includes AI companions, emotionally responsive chatbots, and virtual avatars offered to the public. (Source: Bloomberg) 👉 https://www.bloomberg.com/news/articles/2025-12-27/china-issues-draft-rules-to-govern-use-of-human-like-ai-systems
The concern is not intelligence — it’s attachment.
Timeline: How China’s AI Regulation Reached This Point
2021 ─ Algorithm Recommendation Rules
(content feeds & behavioral manipulation)
2023 ─ Interim Measures for Generative AI
(public-facing chatbots & generators)
2024 ─ Deepfake & Synthetic Media Rules
(identity protection & labeling)
2025 ─ Draft Rules for Human-Like AI
(emotion, dependency, psychological impact)
Each regulatory step moves closer to governing human–AI interaction, not just outputs.
Background context: 👉 https://en.wikipedia.org/wiki/Interim_Measures_for_the_Management_of_Generative_AI_Services
The Core Principle
If AI feels human, it must be governed like something that affects human psychology.
This idea runs through the entire draft.
Key Requirements Explained
1. Mandatory Disclosure: “This Is an AI”
AI providers must clearly and repeatedly inform users that they are interacting with an AI system — not a human:
- at login
- during extended conversations
- when users show signs of emotional reliance
This requirement directly targets the illusion of humanity. (Source: Bloomberg) 👉 https://www.bloomberg.com/news/articles/2025-12-27/china-issues-draft-rules-to-govern-use-of-human-like-ai-systems
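For builders, one way to satisfy this kind of obligation could look like the minimal sketch below. The reminder interval, the notice text, and the reliance flag are illustrative assumptions, not requirements taken from the draft.

```python
# Hypothetical sketch of repeated AI-identity disclosure in a chat loop.
# Thresholds and signal names are illustrative assumptions, not taken from the draft rules.

AI_NOTICE = "Reminder: you are talking to an AI system, not a human."

class DisclosurePolicy:
    def __init__(self, remind_every_n_turns: int = 20):
        self.remind_every_n_turns = remind_every_n_turns

    def notices_for(self, turn_index: int, reliance_flagged: bool) -> list[str]:
        """Return any identity notices that should accompany this turn."""
        notices = []
        if turn_index == 0:
            # at login / session start
            notices.append(AI_NOTICE)
        elif turn_index % self.remind_every_n_turns == 0:
            # during extended conversations
            notices.append(AI_NOTICE)
        if reliance_flagged:
            # when signs of emotional reliance appear
            notices.append(AI_NOTICE)
        return notices

# Usage: prepend any returned notices to the assistant's reply before showing it to the user.
policy = DisclosurePolicy()
print(policy.notices_for(turn_index=0, reliance_flagged=False))
```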
2. Monitoring Emotional Dependency
One of the most striking elements of the draft: providers are expected to monitor emotional risk.
If systems detect:
- excessive usage
- emotional dependence
- addiction-like behavior
they must intervene with warnings or usage guidance. (Source: Channel NewsAsia) 👉 https://www.channelnewsasia.com/east-asia/china-issues-drafts-rules-regulate-ai-human-interaction-5724576
This effectively makes AI companies partially responsible for user mental well-being.
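What “monitoring emotional risk” means in practice is left to providers. Below is a minimal sketch of one possible session-level check; the signals and thresholds are assumptions for illustration, since the draft describes the obligation rather than an implementation.

```python
# Hypothetical sketch of a session-level dependency check.
# The signals (daily minutes, session count, reliance signals) and thresholds are
# illustrative assumptions; the draft describes the duty to intervene, not how.
from dataclasses import dataclass

@dataclass
class UsageStats:
    minutes_today: float
    sessions_today: int
    reliance_signals: int  # e.g. count of messages classified as emotionally dependent

def dependency_intervention(stats: UsageStats) -> str | None:
    """Return a warning or usage-guidance message if risk signals cross a threshold."""
    if stats.minutes_today > 180 or stats.sessions_today > 10:
        return "You have been chatting for a long time today. Consider taking a break."
    if stats.reliance_signals >= 3:
        return "This assistant is an AI and cannot replace support from real people."
    return None

# Usage: run the check at the end of each turn and surface any returned message to the user.
print(dependency_intervention(UsageStats(minutes_today=200, sessions_today=4, reliance_signals=0)))
```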
3. Content and Ideological Guardrails
As with earlier AI regulations, human-like AI systems must not generate content that:
- threatens national security
- spreads misinformation or rumors
- promotes violence, obscenity, or illegal activity
- undermines social order or personal dignity
The draft also reiterates alignment with “core socialist values.” (Source: China Daily) 👉 https://global.chinadaily.com.cn/a/202512/27/WS694fd34aa310d6866eb30c4f.html
4. Full Lifecycle Accountability
Developers are responsible for the entire AI product lifecycle, including:
- algorithm safety reviews
- data protection and privacy
- security assessments before launch
- compliance reporting to regulators
Large-scale deployments may require additional filings with provincial authorities. (Source: Reuters) 👉 https://www.reuters.com/world/asia-pacific/china-issues-drafts-rules-regulate-ai-with-human-like-interaction-2025-12-27/
Chart: How China’s Approach Compares Globally
| Dimension | China (Draft 2025) | EU AI Act | United States |
|---|---|---|---|
| Emotional Dependency | Explicitly regulated | Limited / indirect | Largely unregulated |
| AI Identity Disclosure | Mandatory & repeated | Required in some cases | Mostly voluntary |
| Psychological Harm | Provider responsibility | Risk-based | Consumer law |
| Ideological Alignment | Required | Not applicable | Not applicable |
| Regulatory Style | Pre-emptive | Risk classification | Reactive |
EU reference: 👉 https://artificialintelligenceact.eu/
Why China Is Acting Now
1. Emotional risk is the next AI risk. As AI companions scale, psychological effects become harder to ignore.
2. Human-like AI is reaching mass adoption. Waiting for scandals would be costlier than regulating early.
3. Governance before crisis. China’s regulatory model prioritizes prevention over reaction.
What This Means for AI Builders
For teams building conversational or companion-style AI, this draft is a preview of what regulators elsewhere may soon expect:
- explicit AI identity signaling
- UX that avoids emotional manipulation
- dependency-aware interaction design
- stronger accountability for downstream harm
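One lightweight way to keep these obligations visible in a codebase is a product-level policy object that product, legal, and engineering teams can review together. The sketch below is purely illustrative; every field name and default is an assumption.

```python
# Hypothetical product-level policy object tying the design expectations together.
# All names and defaults are assumptions for illustration, not terms from the draft.
from dataclasses import dataclass

@dataclass(frozen=True)
class CompanionAIPolicy:
    disclose_identity_at_start: bool = True
    identity_reminder_interval_turns: int = 20      # explicit AI identity signaling
    block_emotional_manipulation_prompts: bool = True  # UX that avoids emotional manipulation
    dependency_check_enabled: bool = True           # dependency-aware interaction design
    incident_reporting_contact: str = ""            # accountability for downstream harm

DEFAULT_POLICY = CompanionAIPolicy()
```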
Human-like AI now implies human-level responsibility.
The Bigger Picture
For years, AI regulation focused on what models say.
China’s draft focuses on what models make people feel.
That shift won’t stay local.
As AI moves from tools to companions, regulators worldwide will face the same question:
How human should AI be allowed to feel — before it needs human-level rules?